Learning Log-Determinant Divergences for Positive Definite Matrices

Authors
Abstract


Similar Articles

Learning Discriminative αβ-Divergences for Positive Definite Matrices

Symmetric positive definite (SPD) matrices are useful for capturing second-order statistics of visual data. To compare two SPD matrices, several measures are available, such as the affine-invariant Riemannian metric, the Jeffreys divergence, and the Jensen-Bregman LogDet divergence; however, their behavior may be application dependent, raising the need for manual selection to achieve the best possibl...
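As a quick illustration of one of the measures named above, here is a minimal NumPy sketch (not code from the paper) of the Jensen-Bregman LogDet divergence between two SPD matrices:

```python
import numpy as np

def jensen_bregman_logdet(X, Y):
    """Jensen-Bregman LogDet divergence:
    log det((X + Y) / 2) - (1/2) (log det X + log det Y)."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

# Toy SPD matrices built as A @ A.T + I (SPD by construction)
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X = A @ A.T + np.eye(3)
Y = B @ B.T + np.eye(3)
print(jensen_bregman_logdet(X, Y))  # positive for X != Y
print(jensen_bregman_logdet(X, X))  # zero when the arguments coincide
```

Using `slogdet` instead of `log(det(...))` keeps the computation stable when determinants are very large or very small.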

A New Determinant Inequality of Positive Semi-Definite Matrices

We discover and prove a new determinant inequality of positive semidefinite matrices. This new inequality is useful for attacking and solving a variety of optimization problems arising from the design of wireless communication systems. I. A NEW DETERMINANT INEQUALITY The following notation is used throughout this article. The notations [·]^T and [·]^H stand for transpose and Hermitian tr...
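The paper's own inequality is not reproduced in this excerpt. For context only, the classical Minkowski determinant inequality for positive semidefinite matrices, a well-known result in the same family, can be checked numerically:

```python
import numpy as np

# Classical Minkowski determinant inequality (context, not the paper's result):
# for n x n positive semidefinite A and B,
#   det(A + B)**(1/n) >= det(A)**(1/n) + det(B)**(1/n)
rng = np.random.default_rng(1)
n = 4
P = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
A = P @ P.T  # PSD by construction
B = Q @ Q.T
lhs = np.linalg.det(A + B) ** (1.0 / n)
rhs = np.linalg.det(A) ** (1.0 / n) + np.linalg.det(B) ** (1.0 / n)
print(lhs >= rhs)  # True
```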

Infinite-dimensional Log-Determinant divergences II: Alpha-Beta divergences

This work presents a parametrized family of divergences, namely Alpha-Beta LogDeterminant (Log-Det) divergences, between positive definite unitized trace class operators on a Hilbert space. This is a generalization of the Alpha-Beta Log-Determinant divergences between symmetric, positive definite matrices to the infinite-dimensional setting. The family of Alpha-Beta Log-Det divergences is highl...
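In the finite-dimensional setting being generalized, the family can be written for SPD matrices P and Q as follows (a sketch following the standard Alpha-Beta Log-Det formulation; the paper's notation may differ):

```latex
D_{AB}^{(\alpha,\beta)}(P \,\|\, Q)
  = \frac{1}{\alpha\beta}
    \log\det\!\left(
      \frac{\alpha \,(PQ^{-1})^{\beta} + \beta \,(PQ^{-1})^{-\alpha}}{\alpha+\beta}
    \right),
  \qquad \alpha > 0,\ \beta > 0.
```

Particular parameter choices recover familiar measures; for example, α = β = 1/2 yields four times the Jensen-Bregman LogDet divergence.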

Supervised LogEuclidean Metric Learning for Symmetric Positive Definite Matrices

Metric learning has been shown to be highly effective to improve the performance of nearest neighbor classification. In this paper, we address the problem of metric learning for symmetric positive definite (SPD) matrices such as covariance matrices, which arise in many real-world applications. Naively using standard Mahalanobis metric learning methods under the Euclidean geometry for SPD matric...
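The general idea can be sketched as follows (a hypothetical minimal implementation, not the paper's code): map each SPD matrix through the matrix logarithm, vectorise it, and measure distances with a learned Mahalanobis matrix in that flat space.

```python
import numpy as np

def spd_log_vec(X):
    """Vectorise log(X) for SPD X: diagonal entries plus upper-triangular
    entries scaled by sqrt(2), so the Euclidean norm of the vector equals
    the Frobenius norm of log(X)."""
    w, V = np.linalg.eigh(X)
    L = (V * np.log(w)) @ V.T  # matrix logarithm via eigendecomposition
    iu = np.triu_indices_from(L, k=1)
    return np.concatenate([np.diag(L), np.sqrt(2.0) * L[iu]])

def mahalanobis(u, v, M):
    """Mahalanobis distance with a PSD matrix M."""
    d = u - v
    return np.sqrt(d @ M @ d)

# With M = I this reproduces the plain log-Euclidean distance; a supervised
# method would instead fit a PSD matrix M from labelled pairs.
X = np.array([[2.0, 0.5], [0.5, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 3.0]])
u, v = spd_log_vec(X), spd_log_vec(Y)
d = mahalanobis(u, v, np.eye(u.size))
print(d)
```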

Riemannian Metric Learning for Symmetric Positive Definite Matrices

Over the past few years, symmetric positive definite (SPD) matrices have been receiving considerable attention from the computer vision community. Though various distance measures have been proposed in the past for comparing SPD matrices, the two most widely used measures are the affine-invariant distance and the log-Euclidean distance. This is because these two measures are true geodesic distances induced...
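The two distances mentioned here can be written down directly; a minimal NumPy sketch (illustrative, not taken from the paper):

```python
import numpy as np

def _spd_fn(X, f):
    """Apply a scalar function to an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * f(w)) @ V.T

def affine_invariant_distance(X, Y):
    """d(X, Y) = || logm(X^{-1/2} Y X^{-1/2}) ||_F"""
    Xih = _spd_fn(X, lambda w: w ** -0.5)
    return np.linalg.norm(_spd_fn(Xih @ Y @ Xih, np.log), 'fro')

def log_euclidean_distance(X, Y):
    """d(X, Y) = || logm(X) - logm(Y) ||_F"""
    return np.linalg.norm(_spd_fn(X, np.log) - _spd_fn(Y, np.log), 'fro')

X = np.array([[2.0, 0.3], [0.3, 1.0]])
Y = np.array([[1.5, -0.2], [-0.2, 2.0]])
print(affine_invariant_distance(X, Y))
print(log_euclidean_distance(X, Y))
```

The two distances coincide whenever X and Y commute (for example, when both are diagonal), but differ in general; the log-Euclidean one is cheaper because the matrix logarithms can be precomputed per matrix.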


Journal

Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence

Year: 2021

ISSN: 0162-8828, 2160-9292, 1939-3539

DOI: 10.1109/tpami.2021.3073588